Goto

Collaborating Authors

Stiefel manifold






Robust low-rank training via approximate orthonormal constraints

Neural Information Processing Systems

By modeling robustness in terms of the condition number of the neural network, we argue that this loss of robustness is due to the exploding singular values of the low-rank weight matrices.


On Slicing Optimality for Mutual Information

Ammar Fayad

Neural Information Processing Systems

… P and Q, respectively, is tight in P(X × Y). Following (Hero, 2004; Ghourchian et al., 2017), we present the outline of our argument in three steps. K ⊆ P(X) is tight iff the closure of K is sequentially compact in P(X) with respect to weak convergence. Remark 1. We could proceed differently by imposing stronger assumptions. We briefly discuss the outline of the proof for the sake of completeness (Loeve, 2017). The argument here depends on two important facts: 1. …
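The tightness/compactness equivalence invoked in this excerpt is a form of Prokhorov's theorem; assuming the underlying space is Polish (an assumption not stated in the excerpt itself), it can be written as:

```latex
% Prokhorov's theorem, stated for a Polish space X (assumed here).
\begin{theorem}[Prokhorov]
Let $X$ be a Polish space and let $K \subseteq \mathcal{P}(X)$ be a family of
probability measures. Then $K$ is tight if and only if the closure of $K$ is
sequentially compact in $\mathcal{P}(X)$ with respect to the topology of weak
convergence.
\end{theorem}
```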